-
Free, publicly-accessible full text available June 20, 2026
-
Lighting consumes about 10% of global electricity. White laser lighting, using either direct color mixing or phosphor conversion, can potentially boost efficiency well beyond existing light-emitting diodes (LEDs), especially at high current density. Here we present a compact, universal packaging platform for both laser lighting schemes that is simultaneously scalable to wavelength-division multiplexing for visible light communication. Using commercially available laser diodes and optical components, low speckle contrast (≤ 5%) and uniform illumination are achieved by multi-stage scattering and photon recycling through a mixing rod and static diffusers in a truncated-pyramidal reflective cavity. We demonstrate a high luminous efficacy of 274 lm/Wo for phosphor-converted laser lighting and 150 lm/Wo for direct red-green-blue laser mixing, respectively, in reference to the input optical power. In the former case, the luminous efficacy achieved for practical lighting is even higher than most previous reports measured using integrating spheres. In the latter case of direct laser color mixing, this is, to our best knowledge, the first demonstration of a luminous efficacy approaching that of phosphor-conversion counterparts in a compact package applicable to practical lighting. With future improvement of blue laser diode efficiency and development of yellow/amber/orange laser diodes, we envision that this universal white laser package can potentially achieve a luminous efficacy > 275 lm/We in reference to the input electrical power, ~ 1.5x higher than state-of-the-art LED lighting and exceeding the target of 249 lm/We for 2035.
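The speckle-contrast figure of merit quoted above has a standard definition: the standard deviation of the intensity over its mean across an image of the illuminated surface. A minimal sketch of that computation (the function name and the use of a NumPy intensity array are illustrative, not from the paper):

```python
import numpy as np

def speckle_contrast(intensity):
    """Speckle contrast C = std(I) / mean(I) over an intensity image.

    C = 1 for fully developed speckle; the abstract's low-speckle
    target corresponds to C <= 0.05 (5%).
    """
    intensity = np.asarray(intensity, dtype=float)
    return intensity.std() / intensity.mean()
```

A perfectly uniform field gives C = 0; multi-stage scattering and photon recycling reduce C by averaging many independent speckle patterns.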
-
Visual tags (e.g., barcodes, QR codes) are ubiquitous in modern life, though they rely on obtrusive geometric patterns to encode data, degrading the overall user experience. We propose a new paradigm of passive visual tags that uses light polarization to imperceptibly encode data with cheap, widely available components. The tag and its data can be extracted from background scenery using off-the-shelf cameras with inexpensive LCD shutters attached atop the camera lenses. We examine the feasibility of this design with real-world experiments. Initial results show zero bit errors at distances up to 3.0 m, an angular-detection range of 110°, and robustness to manifold ambient light and occlusion scenarios.
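One simple way such a polarization tag could be separated from background scenery is differential imaging: capture one frame with the LCD shutter passing each of two orthogonal polarization states and subtract. Unpolarized background light contributes equally to both frames and cancels, while polarized tag regions survive. A hedged sketch of this idea (the function, frame layout, and threshold are illustrative assumptions; the abstract does not specify the actual decoding pipeline):

```python
import numpy as np

def extract_tag_mask(frame_0, frame_90, threshold=0.2):
    """Hypothetical differential decode for a polarization-based tag.

    frame_0, frame_90: images taken through the LCD shutter at two
    orthogonal polarization states. Unpolarized background cancels in
    the difference; pixels whose intensity changes by more than
    `threshold` are flagged as candidate tag regions.
    """
    diff = np.asarray(frame_0, dtype=float) - np.asarray(frame_90, dtype=float)
    return np.abs(diff) > threshold
```

In practice, the threshold would have to adapt to ambient light, which is presumably part of what the robustness experiments evaluate.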
-
Face touch is an unconscious human habit. Frequent touching of sensitive/mucosal facial zones (eyes, nose, and mouth) increases health risks by passing pathogens into the body and spreading diseases. Furthermore, accurate monitoring of face touch is critical for behavioral intervention. Existing monitoring systems only capture objects approaching the face rather than detecting actual touches. As such, these systems are prone to false positives upon hand or object movement in proximity to one's face (e.g., picking up a phone). We present FaceSense, an ear-worn system capable of identifying actual touches and differentiating touches to sensitive/mucosal areas from those to other facial areas. Following a multimodal approach, FaceSense integrates low-resolution thermal images and physiological signals. Thermal sensors sense the thermal infrared signal emitted by an approaching hand, while physiological sensors monitor impedance changes caused by skin deformation during a touch. Processed thermal and physiological signals are fed into a deep learning model (TouchNet) to detect touches and identify the facial zone of the touch. We fabricated prototypes using off-the-shelf hardware and conducted experiments with 14 participants while they performed various daily activities (e.g., drinking, talking). Results show a macro-F1 score of 83.4% for touch detection with leave-one-user-out cross-validation and a macro-F1 score of 90.1% for touch zone identification with a personalized model.
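The macro-F1 score reported above is the unweighted mean of per-class F1 scores, so minority classes (e.g., rare touch zones) count as much as frequent ones. A minimal sketch of that metric (a from-scratch illustration, not FaceSense's evaluation code; in practice a library such as scikit-learn's `f1_score` with `average='macro'` would be used):

```python
def macro_f1(y_true, y_pred, labels):
    """Macro-F1: unweighted mean of per-class F1 scores.

    Each class contributes equally regardless of its frequency,
    which is why it suits imbalanced touch-zone data.
    """
    f1_scores = []
    for c in labels:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        precision = tp / (tp + fp) if (tp + fp) else 0.0
        recall = tp / (tp + fn) if (tp + fn) else 0.0
        denom = precision + recall
        f1_scores.append(2 * precision * recall / denom if denom else 0.0)
    return sum(f1_scores) / len(f1_scores)
```

With leave-one-user-out cross-validation, this score is computed on each held-out participant's data, so it estimates performance on unseen users.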